7 research outputs found

    Robust Face Recognition With Kernelized Locality-Sensitive Group Sparsity Representation

    In this paper, a novel joint sparse representation method is proposed for robust face recognition. We embed both group sparsity and kernelized locality-sensitive constraints into the framework of sparse representation. The group sparsity constraint is designed to exploit the grouped structure information in the training data. The local similarity between test and training data is measured in the kernel space instead of the Euclidean space. As a result, the embedded nonlinear information can be effectively captured, leading to a more discriminative representation. We show that, by integrating the kernelized locality-sensitive constraint and the group sparsity constraint, the embedded structure information can be better explored, and significant performance improvement can be achieved. Experiments on the ORL, AR, Extended Yale B, and LFW data sets verify the superiority of our method, while experiments on two unconstrained data sets, LFW and IJB-A, show that the utilization of sparsity can improve recognition performance, especially on data sets with large pose variation.
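The kernelized locality constraint above replaces Euclidean nearest-neighbor weighting with distances measured in the kernel-induced feature space. A minimal numpy sketch of that distance and the resulting per-sample locality weights (the RBF kernel choice and function names are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def kernel_space_distance(x, y, gamma=1.0):
    # Distance in the feature space induced by the kernel:
    # d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y); for RBF, k(z, z) = 1.
    return np.sqrt(max(0.0, 2.0 - 2.0 * rbf_kernel(x, y, gamma)))

def locality_weights(test, train, gamma=1.0):
    # One weight per training sample: samples close to the test sample
    # in kernel space get small weights, so their coefficients are
    # penalized less in the locality-sensitive sparse coding objective.
    return np.array([kernel_space_distance(test, t, gamma) for t in train])
```

In a full solver these weights would scale the coefficient penalty inside a group-sparse coding objective; the sketch only shows how locality is measured in kernel rather than Euclidean space.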

    Dense Invariant Feature Based Support Vector Ranking for Cross-Camera Person Re-identification

    Recently, support vector ranking has been adopted to address the challenging person re-identification problem. However, a ranking model based on ordinary global features cannot well represent the significant variation of pose and viewpoint across camera views. To address this issue, a novel ranking method which fuses dense invariant features is proposed in this paper to model the variation of images across camera views. An optimal space for ranking is learned by simultaneously maximizing the margin and minimizing the error on the fused features. The proposed method significantly outperforms the original support vector ranking algorithm due to the invariance of the dense invariant features, the fusion of the bidirectional features, and the adaptive adjustment of parameters. Experimental results demonstrate that the proposed method is competitive with state-of-the-art methods on two challenging datasets, showing its potential for real-world person re-identification.
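Support vector ranking of this kind is commonly reduced to classification on difference vectors of matched and mismatched image pairs, with the margin maximized over those differences. A minimal hinge-loss subgradient sketch of that pairwise transform (a simplification; the paper's fused bidirectional features and adaptive parameter adjustment are not modeled here):

```python
import numpy as np

def rank_svm(pos_diffs, neg_diffs, lr=0.05, epochs=200, lam=0.01):
    """Learn a weight vector w so that matched-pair difference vectors
    score above mismatched ones (L2-regularized hinge loss)."""
    X = np.vstack([pos_diffs, neg_diffs]).astype(float)
    y = np.concatenate([np.ones(len(pos_diffs)), -np.ones(len(neg_diffs))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * w.dot(xi) < 1.0:            # margin violated
                w -= lr * (lam * w - yi * xi)
            else:
                w -= lr * lam * w               # regularization step only
    return w
```

At test time, gallery images are ranked for a probe by scoring each probe-gallery difference vector with the learned w.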

    Dense invariant feature based support vector ranking for person re-identification

    Recently, support vector ranking has been adopted to address the challenging person re-identification problem. However, a ranking model based on ordinary global features cannot represent the significant variation of pose and viewpoint across camera views. Thus, a novel ranking method which fuses dense invariant features is proposed in this paper to model the variation of images across camera views. By maximizing the margin and minimizing the error score for the fused features, an optimal space for ranking is learned. Due to the invariance of the dense invariant features and the fusion of the bidirectional features, the proposed method significantly outperforms the original support vector ranking algorithm and is competitive with state-of-the-art techniques on two challenging datasets, showing its potential for real-world person re-identification.

    YOLO-DFAN: Effective High-Altitude Safety Belt Detection Network

    This paper proposes the You Only Look Once (YOLO) dependency fusing attention network (DFAN) detection algorithm, an improvement on the lightweight YOLOv4-tiny network. It combines the speed of traditional lightweight networks with the precision of traditional heavyweight networks, making it well suited to real-time detection of high-altitude safety belts on embedded equipment. To address the difficulty of extracting features from objects with a low effective pixel ratio (that is, a low ratio of actual object area to detection anchor area) in the YOLOv4-tiny network, we make three major improvements to the baseline network: first, we introduce an atrous spatial pyramid pooling network after CSPDarkNet-tiny extracts features; second, we propose the DFAN; third, we introduce the path aggregation network (PANet) to replace the feature pyramid network (FPN) of the original network and fuse it with the DFAN. Experimental results on the high-altitude safety belt dataset show that YOLO-DFAN improves accuracy by 5.13% over the original network, and its detection speed meets the real-time requirement. The algorithm also shows a clear improvement on the Pascal VOC07+12 dataset.
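Two quantities the abstract relies on can be written down directly: the effective pixel ratio it defines, and the enlarged receptive field of the dilated (atrous) convolutions used in spatial pyramid pooling. A small sketch (the ratio follows the abstract's definition; the kernel-extent formula is the standard dilated-convolution relation, not a detail taken from the paper):

```python
def effective_pixel_ratio(obj_w, obj_h, anchor_w, anchor_h):
    # Ratio of an object's actual area to its detection anchor's area;
    # distant safety belts have a low ratio and are hard to detect.
    return (obj_w * obj_h) / (anchor_w * anchor_h)

def dilated_kernel_extent(k, d):
    # Spatial extent of a k x k kernel with dilation rate d: the standard
    # relation behind atrous pooling's enlarged receptive field.
    return k + (k - 1) * (d - 1)
```

For example, a 3x3 kernel with dilation 6 covers a 13x13 region at no extra parameter cost, which is what lets the pyramid capture context for small, low-ratio objects.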

    Image-to-class distance ratio: A feature filtering metric for image classification

    A growing number of complex features have been designed to address various problems in computer vision. Feature selection is an efficient way to reduce the heavy computation cost caused by complex and lengthy features: the full feature set is replaced by a discriminative subset, selected according to some criterion, to reduce dimensionality. In most current feature selection methods, the subset selection metrics evaluate features according to their relevance. However, the discriminative power of a feature subset is not determined by relevance alone, especially in the case of complex features. In this paper, a new feature subset selection metric, the image-to-class distance ratio based on Euclidean distance, is proposed to select a subset in which the average intra-class Euclidean distance is minimized and the average inter-class Euclidean distance is maximized, leading to good classification performance. In addition, the search space for feature selection problems with large, complex feature sets is huge, which makes many heuristic search methods infeasible. A Particle Swarm Optimization (PSO) based subset search algorithm is therefore introduced to search for the best subset under the proposed metric in this huge search space. Experimental results show that the proposed subset search algorithm (I2CDRPSO) is effective, and the feature subset selected by the metric performs better than those selected by other feature selection metrics across several classification methods.
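The proposed metric rewards feature subsets whose intra-class distances are small relative to their inter-class distances. A minimal numpy sketch of such a ratio over a labeled feature set (a simplified pairwise version for illustration; the paper's exact image-to-class formulation and PSO search are not reproduced here):

```python
import numpy as np

def distance_ratio(features, labels):
    """Average intra-class Euclidean distance divided by average
    inter-class distance; a lower ratio suggests a more
    discriminative feature subset."""
    feats = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    intra, inter = [], []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = np.linalg.norm(feats[i] - feats[j])
            (intra if labels[i] == labels[j] else inter).append(d)
    return np.mean(intra) / np.mean(inter)
```

A PSO search would evaluate this ratio on the candidate feature columns encoded by each particle and keep the subset with the lowest value.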

    Part-Based Attribute-Aware Network for Person Re-Identification

    Despite rapid progress over the past decade, person re-identification (reID) remains a challenging task because discriminative features at different granularities are easily affected by illumination and camera-view variation. Most deep learning-based reID algorithms extract a global embedding from a convolutional neural network as the representation of the pedestrian. Considering that person attributes are robust and informative for identifying pedestrians, this paper proposes a multi-branch model, the part-based attribute-aware network (PAAN), to improve both person reID and attribute recognition performance; it utilizes not only the ID label visible to the whole image but also attribute information. To learn a discriminative and robust global representation that is invariant to the variations mentioned above, we resort to global and local person attributes to build global and local representations, respectively, using our proposed layered partition strategy. Our goal is to exploit global and local semantic information to guide the optimization of the global representation. In addition, to enhance the global representation, we design a semantic bridge that replenishes mid-level semantic information for the final representation, which contains high-level semantic information. Extensive experiments on two large-scale person re-identification datasets, Market-1501 and DukeMTMC-reID, demonstrate the effectiveness of the proposed approach: it achieves rank-1 accuracy of 92.40% on Market-1501 and 82.59% on DukeMTMC-reID, showing strong competitiveness with the state of the art.
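The abstract does not spell out the layered partition strategy; part-based reID models typically split the backbone feature map into horizontal stripes and pool each into a local descriptor. A numpy sketch of that generic scheme (illustrative only, assuming an (H, W, C) feature map; not the paper's exact partition):

```python
import numpy as np

def horizontal_parts(feature_map, num_parts):
    # Split an (H, W, C) feature map into num_parts horizontal stripes
    # and average-pool each stripe into a C-dimensional local descriptor.
    h = feature_map.shape[0]
    bounds = np.linspace(0, h, num_parts + 1).astype(int)
    return [feature_map[a:b].mean(axis=(0, 1))
            for a, b in zip(bounds[:-1], bounds[1:])]
```

Each stripe descriptor would then feed a local attribute branch, while the un-partitioned map feeds the global branch.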